Recently, evolutionary multitasking (EMT) has been successfully applied to high-dimensional classification. However, task generation in existing EMT-based feature selection (FS) methods is relatively simple: they use only the Relief-F method to group related features of similar importance into one task, which cannot provide diversified tasks for knowledge transfer. Thus, this paper devises a new EMT algorithm for FS in high-dimensional classification, which first adopts different filtering methods to produce multiple tasks and then modifies a competitive swarm optimizer to efficiently solve these related tasks via knowledge transfer. First, a diversified multiple-task generation method is designed based on multiple filtering methods, which generates several relevant low-dimensional FS tasks by eliminating irrelevant features. In this way, useful knowledge gained from solving these simple, related tasks can be transferred to simplify and speed up the solution of the original high-dimensional FS task. Then, a competitive swarm optimizer is modified to solve these relevant FS tasks simultaneously by transferring useful knowledge among them. Extensive empirical results demonstrate that the proposed EMT-based FS method obtains a better feature subset than several state-of-the-art FS methods on eighteen high-dimensional datasets.
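As a rough illustration of the diversified task-generation idea (not the authors' exact procedure), the sketch below builds several related low-dimensional FS tasks by ranking features under different scikit-learn filter criteria and keeping the top-ranked features of each; the toy dataset, the filter choices, and the `keep` threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

def generate_fs_tasks(X, y, keep=50):
    """Build several related low-dimensional FS tasks, one per filter method.

    Each task is the index set of the `keep` top-ranked features under one
    filter criterion; features ranked as irrelevant by that criterion are
    eliminated from the task.
    """
    X_pos = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative input
    filters = {
        "chi2": chi2(X_pos, y)[0],
        "anova_f": f_classif(X, y)[0],
        "mutual_info": mutual_info_classif(X, y, random_state=0),
    }
    tasks = {}
    for name, scores in filters.items():
        tasks[name] = np.argsort(scores)[::-1][:keep]  # top-`keep` features
    return tasks

X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)
tasks = generate_fs_tasks(X, y)
for name, idx in tasks.items():
    print(name, idx[:10])
```

Each resulting index set defines one low-dimensional task; an EMT solver would then optimize feature subsets on these tasks in parallel while exchanging promising subsets among them.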
Accurate spatial-temporal traffic flow forecasting is essential for helping traffic managers take control measures and drivers choose optimal travel routes. Recently, graph convolutional networks (GCNs) have been widely used in traffic flow prediction owing to their powerful ability to capture spatial-temporal dependencies. The design of the spatial-temporal graph adjacency matrix is key to the success of GCNs and remains an open question. This paper proposes reconstructing the binary adjacency matrix via tensor decomposition and builds a traffic flow forecasting method upon it. First, we reformulate the spatial-temporal fusion graph adjacency matrix into a three-way adjacency tensor. Then, we reconstruct the adjacency tensor via Tucker decomposition, wherein more informative and global spatial-temporal dependencies are encoded. Finally, a Spatial-temporal Synchronous Graph Convolutional module for localized spatial-temporal correlation learning and a Dilated Convolution module for global correlation learning are assembled to aggregate and learn the comprehensive spatial-temporal dependencies of the road network. Experimental results on four open-access datasets demonstrate that the proposed model outperforms state-of-the-art approaches in terms of both prediction performance and computational cost.
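To make the tensor reconstruction step concrete, here is a minimal sketch using TensorLy's Tucker decomposition to re-encode a binary three-way adjacency tensor as a dense low-rank reconstruction; the tensor shape and Tucker ranks are toy assumptions, not the paper's settings.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Toy spatial-temporal adjacency tensor: (time steps, nodes, nodes).
rng = np.random.default_rng(0)
adj_tensor = (rng.random((4, 30, 30)) < 0.1).astype(float)

# Tucker-decompose the binary tensor, then rebuild it from the core and
# factor matrices; the reconstruction is dense and encodes global, low-rank
# spatial-temporal dependencies instead of hard 0/1 links.
core, factors = tucker(tl.tensor(adj_tensor), rank=[2, 8, 8])
adj_reconstructed = tl.tucker_to_tensor((core, factors))
print(adj_reconstructed.shape)  # (4, 30, 30), now real-valued
```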
Recently, a surge of high-quality 3D-aware GANs have been proposed, which leverage the generative power of neural rendering. It is natural to associate 3D GANs with GAN inversion methods that project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion. Although the facial prior is preserved in pre-trained 3D GANs, reconstructing a 3D portrait from only one monocular image is still an ill-posed problem. Straightforward application of 2D GAN inversion methods focuses on texture similarity only while ignoring the correctness of the 3D geometry. This can cause geometry collapse, especially when reconstructing a side face under an extreme pose. Moreover, synthesized results in novel views are prone to be blurry. In this work, we propose a novel method to improve 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints to make full use of the pseudo auxiliary view obtained via image flipping, which helps obtain a robust and reasonable geometry during the inversion process. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We also design constraints aimed at filtering out conflicting areas from the optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
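A highly simplified sketch of the symmetry-prior idea follows: the portrait is mirrored to form a pseudo auxiliary view, and the inversion objective also scores a render from the mirrored camera pose against that pseudo view. `generator`, `pose`, and `mirror_pose` are hypothetical placeholders, and the loss weighting is an assumption, not the paper's actual interface.

```python
import torch
import torch.nn.functional as F

def symmetry_prior_loss(generator, latent, image, pose, mirror_pose):
    """Score a 3D-GAN inversion latent against both the real view and a
    pseudo auxiliary view obtained by horizontally flipping the image.

    `generator(latent, pose)` is assumed to render an image tensor of shape
    (B, C, H, W) from the given camera pose; `mirror_pose` is the camera pose
    reflected across the symmetry plane of the face.
    """
    recon = generator(latent, pose)                 # render at observed pose
    loss_main = F.l1_loss(recon, image)

    pseudo_view = torch.flip(image, dims=[3])       # flip the width axis
    recon_mirror = generator(latent, mirror_pose)   # render at mirrored pose
    loss_sym = F.l1_loss(recon_mirror, pseudo_view)

    return loss_main + 0.5 * loss_sym               # weight is an assumption
```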
Conversational question generation (CQG) is an important task for machines to assist humans through conversations, e.g., in interactive reading comprehension. Compared with traditional single-turn question generation (SQG), CQG is more challenging in the sense that the generated question must not only be meaningful but also align with the conversation history that has occurred. While previous studies mainly focus on how to model the flow and alignment of the conversation, there has been no thorough study to date of which parts of the context and history are necessary for the model. We argue that shortening the context and history is crucial, as it helps the model optimize more for the coherence of the conversation. To this end, we propose a two-stage CQG framework, CoHS-CQG, which adopts a CoHS module to shorten the context and history of the input. In particular, CoHS selects contiguous sentences and history turns according to their relevance scores via a top-p strategy. Our model achieves state-of-the-art performance on CoQA in both the answer-aware and answer-unaware settings.
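As a toy illustration of the top-p selection step (the actual CoHS module scores sentences and turns with a learned relevance model, which is omitted here), the sketch below keeps the highest-scoring context sentences until their share of the total relevance mass reaches a threshold p, then restores the original order.

```python
def top_p_select(sentences, scores, p=0.8):
    """Select sentences by a top-p strategy over normalized relevance scores.

    Sentences are ranked by score; we keep the smallest prefix whose share of
    the total relevance mass reaches `p`, then restore the original order so
    the shortened context stays readable.
    """
    total = sum(scores)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += scores[i] / total
        if mass >= p:
            break
    return [sentences[i] for i in sorted(kept)]

context = ["Alice met Bob.", "They discussed a book.", "It rained outside.",
           "The book was about whales."]
relevance = [0.4, 0.3, 0.05, 0.25]  # hypothetical relevance scores
print(top_p_select(context, relevance, p=0.8))
```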
Brain network analysis of traumatic brain injury (TBI) patients is critical for assessing their consciousness level and prognosis, which requires segmenting certain consciousness-related brain regions. However, it is difficult to construct a TBI segmentation model because manually annotated MR scans of TBI patients are hard to collect. Data augmentation techniques can be applied to alleviate this data-scarcity problem. However, conventional data augmentation strategies, such as spatial and intensity transformations, cannot mimic the deformations and lesions of traumatic brains, which limits the performance of the subsequent segmentation task. To address these issues, we propose a novel medical image inpainting model named TBI-GAN to synthesize TBI MR scans with paired brain label maps. The main strength of our TBI-GAN method is that it generates TBI images and their corresponding label maps simultaneously, which had not been achieved by previous inpainting methods for medical images. We first generate the inpainted image under the guidance of edge information in a coarse-to-fine manner, and then use the synthesized intensity image as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the capacity of data augmentation. Experimental results show that the proposed TBI-GAN method can produce ample high-quality synthesized TBI images with valid label maps, which can substantially improve 2D and 3D traumatic brain segmentation performance compared with the alternatives.
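The two-stage generation order can be summarized with a small skeleton; `edge_generator`, `image_generator`, and `label_inpainter` are hypothetical stand-ins for trained networks, included only to make the data flow explicit.

```python
def synthesize_tbi_pair(image, label, mask, edge_generator,
                        image_generator, label_inpainter):
    """Sketch of a TBI-GAN-style paired synthesis flow.

    1) complete edge information inside the masked (lesion) region,
    2) inpaint the intensity image coarse-to-fine under edge guidance,
    3) inpaint the label map using the synthesized image as a prior,
    so the output is a matched (image, label) pair for augmentation.
    """
    edges = edge_generator(image, mask)                    # stage 0: edges
    fake_image = image_generator(image, edges, mask)       # stage 1: intensity
    fake_label = label_inpainter(label, fake_image, mask)  # stage 2: labels
    return fake_image, fake_label
```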
Contextual information is essential for various computer vision tasks, and previous works usually design plug-in modules and structural losses to effectively extract and aggregate the global context. These methods use fine labels to optimize the model but ignore that finely trained features are also precious training resources, which can introduce a preferable distribution to hard pixels (i.e., misclassified pixels). Inspired by contrastive learning in the unsupervised paradigm, we apply the contrastive loss in a supervised manner and redesign the loss function to cast off the stereotypes of unsupervised learning (e.g., the imbalance of positives and negatives, and the confusion of anchor computation). To this end, we propose the Positive-Negative Equal contrastive loss (PNE loss), which increases the latent impact of positive embeddings on the anchor and treats positive and negative sample pairs equally. The PNE loss can be directly plugged into existing semantic segmentation frameworks and leads to excellent performance with negligible extra computational cost. We conduct comprehensive experiments with a number of classic segmentation methods (e.g., DeepLabV3, OCRNet, UperNet) and backbones (e.g., ResNet, HRNet, Swin Transformer), and achieve state-of-the-art performance on two benchmark datasets (i.e., Cityscapes and COCO-Stuff). Our code will be made publicly available.
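A simplified reading of the loss (a sketch, not the paper's exact formulation) treats the averaged positive-pair similarity and the averaged negative-pair similarity with equal unit weight for every anchor pixel:

```python
import torch
import torch.nn.functional as F

def pne_style_loss(embeddings, labels, temperature=0.1):
    """Simplified positive-negative-equal contrastive loss over pixel features.

    `embeddings`: (N, D) pixel features (anchors), `labels`: (N,) class ids.
    For each anchor, same-class pixels are positives and different-class
    pixels are negatives; the two pair sets are averaged separately so they
    contribute with equal weight.
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                      # (N, N) similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos = (same & ~eye).float()
    neg = (~same).float()

    # Equal treatment: mean similarity over positives is pulled up and mean
    # similarity over negatives is pushed down with the same unit weight.
    pos_term = -(sim * pos).sum(1) / pos.sum(1).clamp(min=1)
    neg_term = (sim * neg).sum(1) / neg.sum(1).clamp(min=1)
    return (pos_term + neg_term).mean()

feats = torch.randn(16, 8)               # 16 toy pixel embeddings
lbls = torch.randint(0, 3, (16,))        # 3 semantic classes
print(pne_style_loss(feats, lbls))
```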
Implicit radiance functions have emerged as a powerful scene representation for reconstructing and rendering photorealistic views of a 3D scene. However, these representations are poorly editable. On the other hand, explicit representations such as polygonal meshes allow easy editing but are less suitable for reconstructing accurate details in dynamic human heads, such as fine facial features, hair, teeth, and eyes. In this work, we propose Neural Parameterization (NeP), a hybrid representation that provides the advantages of both implicit and explicit methods. NeP is capable of photorealistic rendering while allowing fine-grained editing of the scene geometry and appearance. We first disentangle geometry and appearance by parameterizing the 3D geometry into 2D texture space. We enable geometric editability by introducing an explicit linear deformation layer. The deformation is controlled by a set of sparse key points, which can be explicitly and intuitively shifted to edit the geometry. For appearance, we develop a hybrid 2D texture consisting of an explicit texture map for easy editing and implicit view- and time-dependent residuals to model temporal and view variations. We compare our method with several reconstruction and editing baselines. The results show that NeP achieves almost the same rendering accuracy while maintaining high editability.
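The keypoint-driven deformation can be illustrated with a linear blend: each surface point moves by a distance-weighted average of the displacements applied to a sparse set of key points. The inverse-distance weighting below is a simple assumption standing in for the paper's learned deformation layer.

```python
import numpy as np

def keypoint_deformation(points, keypoints, displacements, eps=1e-8):
    """Deform surface points by linearly blending sparse keypoint shifts.

    `points`:        (N, 3) surface points,
    `keypoints`:     (K, 3) sparse control points,
    `displacements`: (K, 3) user-specified shifts of the key points.
    Each point receives an inverse-distance-weighted blend of the shifts,
    so dragging one key point smoothly edits the nearby geometry.
    """
    d = np.linalg.norm(points[:, None, :] - keypoints[None, :, :], axis=2)
    w = 1.0 / (d + eps)
    w /= w.sum(axis=1, keepdims=True)          # (N, K) blend weights
    return points + w @ displacements

pts = np.random.rand(100, 3)
kps = np.array([[0.2, 0.5, 0.5], [0.8, 0.5, 0.5]])
shift = np.array([[0.0, 0.1, 0.0], [0.0, 0.0, 0.0]])  # drag first key point
print(keypoint_deformation(pts, kps, shift).shape)    # (100, 3)
```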
This paper studies the problem of estimating the contributions of features to the prediction of a specific instance by a machine learning model, as well as the overall contribution of a feature to the model. The causal effect of a feature (variable) on the predicted outcome reflects the contribution of that feature to the prediction. A challenge is that, without a known causal graph, most existing causal effects cannot be estimated from data. In this paper, we define an explanatory causal effect based on a hypothetical ideal experiment. This definition brings several benefits to model-agnostic explanations. First, the explanations are transparent and have causal meaning. Second, the estimation of explanatory causal effects can be data-driven. Third, the causal effects provide both a local explanation of a specific prediction and a global explanation showing the overall importance of a feature in a predictive model. We further propose methods for explanation based on the explanatory causal effects of individual and combined variables. We demonstrate the definition and the methods with experiments on several real-world datasets.
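A back-of-the-envelope version of the idea (the paper's estimator is more careful) is an interventional contrast: set the feature of interest to two values while holding the rest of the instance fixed and compare the model's predictions; averaging the local contrast over a dataset gives a global importance score. All names and value choices below are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

def local_effect(model, x, j, v1, v0):
    """Local explanatory contrast for feature j on a single instance:
    prediction with x[j] set to v1 minus prediction with x[j] set to v0,
    mimicking a hypothetical ideal experiment that intervenes on x[j]."""
    x1, x0 = x.copy(), x.copy()
    x1[j], x0[j] = v1, v0
    return model.predict([x1])[0] - model.predict([x0])[0]

def global_effect(model, X, j, v1, v0):
    """Global importance of feature j: average local contrast over the data."""
    return np.mean([local_effect(model, x, j, v1, v0) for x in X])

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)
hi, lo = np.percentile(X[:, 0], [90, 10])
print(global_effect(model, X[:50], j=0, v1=hi, v0=lo))
```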
Movement and pose assessment of newborns allows experienced pediatricians to predict neurodevelopmental disorders, enabling early intervention for related diseases. However, most state-of-the-art AI approaches to human pose estimation focus on adults, and publicly available benchmarks for infant pose estimation are lacking. In this paper, we fill this gap by proposing an infant pose dataset and a deep aggregation vision transformer (AggPose) for human pose estimation, which introduces a fast-to-train, fully transformer-based framework that does not use convolution operations to extract features in the early stages. It generalizes Transformer + MLP to high-resolution deep-layer aggregation within feature maps, thus enabling information fusion between different vision levels. We pre-train AggPose on the COCO pose dataset and apply it to our newly released large-scale infant pose estimation dataset. The results show that AggPose effectively learns multi-scale features across different resolutions and significantly improves the performance of infant pose estimation. We show that AggPose outperforms the hybrid models HRFormer and TokenPose on the infant pose estimation dataset. Moreover, AggPose achieves a 0.8 AP improvement on COCO val pose estimation. Our code is available at github.com/szar-lab/aggpose.
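A minimal sketch of the multi-resolution fusion flavor described above: feature maps from different stages are resized to a common resolution and mixed by a per-pixel MLP. Shapes, channel counts, and layer sizes are assumptions, not the AggPose architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDeepAggregation(nn.Module):
    """Fuse multi-resolution feature maps with an MLP over channels.

    Each input map is bilinearly resized to the highest resolution, the maps
    are concatenated along channels, and a per-pixel MLP mixes information
    across vision levels (a rough analogue of transformer + MLP aggregation).
    """
    def __init__(self, channels=(32, 64, 128), out_dim=32):
        super().__init__()
        total = sum(channels)
        self.mlp = nn.Sequential(
            nn.Linear(total, 2 * out_dim), nn.GELU(),
            nn.Linear(2 * out_dim, out_dim),
        )

    def forward(self, feats):
        h, w = feats[0].shape[-2:]                      # highest resolution
        up = [F.interpolate(f, size=(h, w), mode="bilinear",
                            align_corners=False) for f in feats]
        x = torch.cat(up, dim=1).permute(0, 2, 3, 1)    # (B, H, W, C_total)
        return self.mlp(x).permute(0, 3, 1, 2)          # (B, out_dim, H, W)

feats = [torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32),
         torch.randn(1, 128, 16, 16)]
print(SimpleDeepAggregation()(feats).shape)  # torch.Size([1, 32, 64, 64])
```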
The click-through rate (CTR) prediction task is to predict whether a user will click on the recommended item. As mind-boggling amounts of data are produced online daily, accelerating CTR prediction model training is critical to ensuring an up-to-date model and reducing the training cost. One approach to increasing the training speed is to apply large-batch training. However, as shown in computer vision and natural language processing tasks, training with a large batch easily suffers from a loss of accuracy. Our experiments show that previous scaling rules fail in the training of CTR prediction neural networks. To tackle this problem, we first show theoretically that the different frequencies of ids make it challenging to scale hyperparameters when scaling the batch size. To stabilize the training process in a large-batch setting, we develop adaptive Column-wise Clipping (CowClip). It enables an easy and effective scaling rule for the embeddings, which keeps the learning rate unchanged and scales the L2 loss. We conduct extensive experiments with four CTR prediction networks on two real-world datasets and successfully scale the batch size to 128 times the original without accuracy loss. In particular, when training the CTR prediction model DeepFM on the Criteo dataset, our optimization framework enlarges the batch size from 1K to 128K with over 0.1% AUC improvement and reduces the training time from 12 hours to 10 minutes on a single V100 GPU. Our code is available at https://github.com/bytedance/LargeBatchCTR.
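A minimal sketch of column-wise clipping as the abstract describes it: each id's embedding gradient row is rescaled so its norm does not exceed a fixed ratio of the corresponding weight row's norm. The ratio value is an illustrative assumption, and refinements of the actual CowClip rule are omitted here.

```python
import torch

def cowclip_style(grad, weight, ratio=1e-5, eps=1e-12):
    """Column-wise clipping sketch for an embedding table.

    `grad` and `weight` are (num_ids, dim). Each id's gradient row is
    rescaled so its norm does not exceed `ratio` times the norm of the
    corresponding weight row, stabilizing rarely updated embeddings when
    the batch size (and hence the gradient magnitude) is scaled up.
    """
    g_norm = grad.norm(dim=1, keepdim=True)
    w_norm = weight.norm(dim=1, keepdim=True)
    scale = (ratio * w_norm / (g_norm + eps)).clamp(max=1.0)
    return grad * scale

emb = torch.randn(1000, 16)        # toy embedding table
grad = torch.randn(1000, 16) * 10  # exaggerated large-batch gradients
print(cowclip_style(grad, emb).norm(dim=1).max())
```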